The Behavior of Epidemics under Bounded Susceptibility
We investigate the sensitivity of epidemic behavior to a bounded
susceptibility constraint -- susceptible nodes are infected by their neighbors
via the regular SI/SIS dynamics, but subject to a cap on the infection rate.
Such a constraint is motivated by modern social networks, wherein messages are
broadcast to all neighbors, but attention spans are limited. Bounded
susceptibility also arises in distributed computing applications with download
bandwidth constraints, and in human epidemics under quarantine policies.
Network epidemics have been extensively studied in the literature; prior work
characterizes the graph structures required to ensure fast spreading under the
SI dynamics, and long lifetime under the SIS dynamics. In particular, these
conditions turn out to be meaningful for two classes of networks of practical
relevance -- dense, uniform (i.e., clique-like) graphs, and sparse, structured
(i.e., star-like) graphs. We show that bounded susceptibility has a surprising
impact on epidemic behavior in these graph families. For the SI dynamics,
bounded susceptibility has no effect on star-like networks, but dramatically
alters the spreading time in clique-like networks. In contrast, for the SIS
dynamics, clique-like networks are unaffected, but star-like networks exhibit a
sharp change in extinction times under bounded susceptibility.
Our findings are useful for the design of disease-resistant networks and
infrastructure networks. More generally, they show that results for existing
epidemic models are sensitive to modeling assumptions in non-intuitive ways,
and suggest caution in directly using these models as guidelines for real systems.
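The bounded-susceptibility SI dynamics described above can be illustrated with a small discrete-time simulation (a toy sketch, not the paper's exact process; the graph size, infection parameter `beta`, and cap values are arbitrary assumptions):

```python
import random

def si_spread_time(adj, beta, cap, seed_node=0, rng=None):
    """Discrete-time SI dynamics with bounded susceptibility: a susceptible
    node's per-step infection probability is driven by
    min(#infected neighbors, cap) rather than the raw neighbor count."""
    rng = rng or random.Random(0)
    n = len(adj)
    infected = {seed_node}
    t = 0
    while len(infected) < n:
        newly = set()
        for v in range(n):
            if v in infected:
                continue
            k = sum(1 for u in adj[v] if u in infected)
            pressure = min(k, cap)            # the susceptibility cap
            p = 1 - (1 - beta) ** pressure    # infection prob. this step
            if rng.random() < p:
                newly.add(v)
        infected |= newly
        t += 1
    return t

# A clique on n nodes: every node neighbors every other node.
n = 50
clique = [[u for u in range(n) if u != v] for v in range(n)]
t_uncapped = si_spread_time(clique, beta=0.05, cap=n)  # cap never binds
t_capped   = si_spread_time(clique, beta=0.05, cap=1)  # bounded susceptibility
print(t_uncapped, t_capped)
```

On a clique the cap is the binding constraint, so the capped spreading time is markedly longer, consistent with the dramatic slowdown on clique-like networks noted above.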
Online learning and decision-making from implicit feedback
This thesis focuses on designing learning and control algorithms for emerging resource allocation platforms like recommender systems, 5G wireless networks, and online marketplaces. These systems operate in an environment that is only partially known, so the controllers must make resource allocation decisions based on implicit feedback obtained from the environment in response to past actions. The goal is to sequentially select actions using incremental feedback so as to optimize performance while simultaneously learning about the environment. We study three problems that exemplify this setting. The first is an inference problem which requires identification of sponsored content in recommender systems. Specifically, we ask whether it is possible to detect the existence of sponsored content disguised as genuine recommendations using implicit feedback from a subset of users of the recommender system. The second problem is the design of scheduling algorithms for switch networks when the user-server link statistics are unknown (e.g., in wireless networks and online marketplaces). The scheduling algorithm must trade off between scheduling the optimal links and obtaining sufficient feedback about all the links for accurate estimates. We observe the close connection of this problem to the stochastic multi-armed bandit problem and analyze bandit-style explore-exploit algorithms for learning the statistical parameters while simultaneously assigning servers to users. The third is the joint problem of base station (BS) activation and rate allocation in an energy-efficient wireless network when the channel statistics are unknown. The controller observes instantaneous channel rates of activated BSs, and thereby sequentially obtains implicit feedback about the channel. Here again, there is a tradeoff between learning the channel and optimizing the operating cost based on estimated parameters. For each of these systems, we propose algorithms with provable asymptotic guarantees.
These learning algorithms highlight the use of implicit feedback in online decision making and control.
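The explore-exploit tradeoff underlying the second problem can be sketched with a generic UCB1 bandit loop (Bernoulli link successes with made-up parameters; this is a standard illustration, not the thesis's actual switch-scheduling algorithm):

```python
import math
import random

def ucb1_schedule(success_probs, horizon, rng=None):
    """UCB1-style explore-exploit: repeatedly pick the link whose empirical
    success rate plus a confidence bonus is largest, observe Bernoulli
    feedback, and update the estimate."""
    rng = rng or random.Random(1)
    n = len(success_probs)
    counts = [0] * n      # times each link was scheduled
    means = [0.0] * n     # empirical success rates
    total_reward = 0
    for t in range(1, horizon + 1):
        if t <= n:        # schedule each link once to initialize
            arm = t - 1
        else:             # empirical mean + confidence bonus
            arm = max(range(n),
                      key=lambda a: means[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        r = 1 if rng.random() < success_probs[arm] else 0
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]
        total_reward += r
    return counts, total_reward

counts, reward = ucb1_schedule([0.2, 0.5, 0.8], horizon=5000)
print(counts)  # the best link (index 2) should dominate
```

The confidence bonus shrinks as a link accumulates feedback, so the scheduler concentrates on the best link while still sampling the others often enough for accurate estimates.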
Modeling the effect of transmission errors on TCP controlled transfers over infrastructure 802.11 wireless LANs
There have been several studies on the performance of TCP controlled transfers over an infrastructure IEEE 802.11 WLAN, assuming perfect channel conditions. In this paper, we develop an analytical model for the throughput of TCP controlled file transfers over the IEEE 802.11 DCF with different packet error probabilities for the stations, accounting for the effect of packet drops on the TCP window. Our analysis proceeds by combining two models: the first is an extension of the usual TCP-over-DCF model for an infrastructure WLAN, where the throughput of a station depends on the probability that the head-of-the-line packet at the Access Point belongs to that station; the second is a model for the TCP window process for connections with different drop probabilities. Iterating between these models yields the head-of-the-line probabilities, from which performance measures such as throughputs and packet failure probabilities can be derived. We find that, due to MAC layer retransmissions, packet losses are rare even with high channel error probabilities, and the stations obtain fair throughputs even when some of them have packet error probabilities as high as 0.1 or 0.2. For some restricted settings we are also able to model tail-drop loss at the AP. Although it involves many approximations, the model captures the system behavior quite accurately, as compared with simulations.
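The observation that MAC-layer retransmissions make losses rare is easy to quantify under the simplifying assumption of independent per-attempt errors: with per-attempt error probability p and a retry limit of R retransmissions (7 is a common 802.11 default; the exact limit here is an assumption, not taken from the paper), the residual loss probability is p^(R+1):

```python
def residual_loss(p_err, retry_limit=7):
    """Probability that a packet is lost after the first transmission and
    all retry_limit MAC-layer retransmissions fail, assuming independent
    per-attempt error probability p_err."""
    return p_err ** (retry_limit + 1)

for p in (0.1, 0.2):
    print(p, residual_loss(p))
# p = 0.2 gives 0.2**8 = 2.56e-6: even a 20% channel error rate
# leaves a negligible packet loss probability seen by TCP.
```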
Spectrum Sharing and Scheduling in D2D-Enabled Dense Cellular Networks
We study device-to-device (D2D) enabled hierarchical cellular networks consisting of a macro base station (BS), a dense network of access nodes (ANs), and mobile users, where spectrum is shared between cellular traffic and D2D traffic. Further, (the receivers of) mobile users dynamically time-share between the cellular and D2D networks. We develop algorithms for channel allocation and mobile-user receiver mode selection (choosing which network to participate in) with the objectives of minimizing delay for cellular traffic and maximizing capacity for D2D traffic. Our proposed solution takes advantage of the unique features offered by large, densified cellular networks, such as multi-point connectivity, channel diversity, spatial reuse, and load distribution. Given a BS-to-mobile delay requirement of d + 1 time-slots, we show that by appropriately scheduling channels and receiver modes, we can (with exponentially high probability) guarantee that cellular traffic reaches its intended destination within d time-slots. By leveraging spatial channel reuse, we show that this is achieved using a vanishingly small fraction of the available spatial capacity. Further, in the presence of delay-constrained cellular traffic, our scheduling algorithm guarantees that D2D traffic can achieve rates within a (1 − 1/d) factor of the corresponding achievable rates without cellular traffic.
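The (1 − 1/d) guarantee means the D2D rate penalty shrinks as the delay budget d grows; a one-line illustration (values are arbitrary):

```python
def d2d_rate_fraction(d):
    """Fraction of the no-cellular-traffic D2D rate retained under a
    delay budget of d time-slots, per the (1 - 1/d) guarantee."""
    return 1 - 1 / d

for d in (2, 10, 100):
    print(d, d2d_rate_fraction(d))
# A delay budget of d = 10 already retains 90% of the D2D rate.
```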